Search for: All records

Creators/Authors contains: "Hsu, Justin"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Backward error analysis offers a method for assessing the quality of numerical programs in the presence of floating-point rounding errors. However, techniques from the numerical analysis literature for quantifying backward error require substantial human effort, and there are currently no tools or automated methods for statically deriving sound backward error bounds. To address this gap, we propose Bean, a typed first-order programming language designed to express quantitative bounds on backward error. Bean’s type system combines a graded coeffect system with strict linearity to soundly track the flow of backward error through programs. We prove the soundness of our system using a novel categorical semantics, where every Bean program denotes a triple of related transformations that together satisfy a backward error guarantee. To illustrate Bean’s potential as a practical tool for automated backward error analysis, we implement a variety of standard algorithms from numerical linear algebra in Bean, establishing fine-grained backward error bounds via typing in a compositional style. We also develop a prototype implementation of Bean that infers backward error bounds automatically. Our evaluation shows that these inferred bounds match worst-case theoretical relative backward error bounds from the literature, underscoring Bean’s utility in validating a key property of numerical programs: numerical stability. (A small worked sketch of the backward-error notion appears after this listing.)
    Free, publicly-accessible full text available June 10, 2026
  2. Free, publicly-accessible full text available December 3, 2025
  3. Envy-free cake-cutting protocols procedurally divide an infinitely divisible good among a set of agents so that no agent prefers another's allocation to their own. These protocols are highly complex and difficult to prove correct. Recently, Bertram, Levinson, and Hsu introduced a language called Slice for describing and verifying cake-cutting protocols. Slice programs can be translated to formulas encoding envy-freeness, which are then discharged by an SMT solver. While Slice works well on smaller protocols, it has difficulty scaling to more complex cake-cutting protocols. We improve Slice in two ways. First, we show that any protocol execution in Slice can be replicated using piecewise uniform valuations. We then reduce Slice's constraint formulas to formulas within the theory of linear real arithmetic, showing that verifying envy-freeness is efficiently decidable. Second, we design and implement a linear type system which enforces that no two agents receive the same part of the good. We implement our methods and verify a range of challenging examples, including the first nontrivial four-agent protocol. (A cut-and-choose sketch with piecewise uniform valuations appears after this listing.)
  4. Algorithms operating on real numbers are implemented as floating-point computations in practice, but floating-point operations introduce roundoff errors that can degrade the accuracy of the result. We propose Λnum, a functional programming language with a type system that can express quantitative bounds on roundoff error. Our type system combines a sensitivity analysis, enforced through a linear typing discipline, with a novel graded monad to track the accumulation of roundoff errors. We prove that our type system is sound by relating the denotational semantics of our language to the exact and floating-point operational semantics. To demonstrate our system, we instantiate Λnum with error metrics proposed in the numerical analysis literature and we show how to incorporate rounding operations that faithfully model aspects of the IEEE 754 floating-point standard. To show that Λnum can be a useful tool for automated error analysis, we develop a prototype implementation for Λnum that infers error bounds that are competitive with existing tools, while often running significantly faster. Finally, we consider semantic extensions of our graded monad to bound error under more complex rounding behaviors, such as non-deterministic and randomized rounding. (A sketch of the standard rounding-error model appears after this listing.)
  5. Microservice-based application deployments need to enforce safety properties while serving requests. Today, however, such properties can be specified only in limited ways, which can lead to overly permissive policies, the potential for illegitimate flows of information across microservices, or ad hoc policy implementations. We argue that a range of use cases require safety properties for the flow of requests across the whole microservice network, rather than only between adjacent hops. To begin specifying such expressive policies, we propose a system for declaring and deploying service tree policies. These policies are compiled down into declarative filters that are inserted into microservice deployment manifests. Enforcement uses lightweight dynamic monitors built on ideas from automata theory. Experiments with our preliminary prototype show that we can capture a wide class of policies, which we describe as case studies. (A toy automaton-based request monitor appears after this listing.)
  6. Rehof, Jakob (Ed.)
    Dependence and Independence Bunched Implications (DIBI) is a logic for reasoning about conditional independence (CI); for instance, DIBI formulas can characterise CI in discrete probability distributions and in relational databases, using a probabilistic DIBI model and a similarly-constructed relational model. Despite the similarity of the two models, a uniform account is still lacking. As a result, the laborious case-by-case verification of the frame conditions required for constructing new models hinders generalising these results to CI in other useful settings, such as continuous distributions. In this paper, we develop an abstract framework for systematically constructing DIBI models, using category theory as the unifying mathematical language. We show that DIBI models arise from arbitrary symmetric monoidal categories with copy-discard structure. In particular, we use string diagrams - a graphical presentation of monoidal categories - to give a uniform definition of the parallel composition and subkernel relation in DIBI models. Our approach not only generalises known models, but also yields new models of interest and reduces properties of DIBI models to structures in the underlying categories. Furthermore, our categorical framework enables a comparison between string diagrammatic approaches to CI in the literature and a logical notion of CI, defined in terms of the satisfaction of specific DIBI formulas. We show that the logical notion is an extension of string diagrammatic CI under reasonable conditions. (A sketch of the conditional-independence check appears after this listing.)
  7. We propose a symbolic execution method for programs that can draw random samples. In contrast to existing work, our method can verify randomized programs with unknown inputs and can prove probabilistic properties that universally quantify over all possible inputs. Our technique augments standard symbolic execution with a new class of probabilistic symbolic variables, which represent the results of random draws, and computes symbolic expressions representing the probability of taking individual paths. We implement our method on top of the KLEE symbolic execution engine alongside multiple optimizations and use it to prove properties about probabilities and expected values for a range of challenging case studies written in C++, including Freivalds’ algorithm, randomized quicksort, and a randomized property-testing algorithm for monotonicity. We evaluate our method against Psi, an exact probabilistic symbolic inference engine, and Storm, a probabilistic model checker, and show that our method significantly outperforms both tools. (A path-enumeration sketch for Freivalds’ check appears after this listing.)
  8. Modern programmable network switches can implement custom applications using efficient packet processing hardware, and the programming language P4 provides high-level constructs to program such switches. The increase in speed and programmability has inspired research in dataplane programming, where many complex functionalities, e.g., key-value stores and load balancers, can be implemented entirely in network switches. However, dataplane programs may suffer from novel security errors that are not traditionally found in network switches. To address this issue, we present a new information-flow control type system for P4. We formalize our type system in a recently-proposed core version of P4, and we prove a soundness theorem: well-typed programs satisfy non-interference. We also implement our type system in a tool, P4BID, which extends the type checker in the p4c compiler, the reference compiler for the latest version of P4. We present several case studies showing that natural security, integrity, and isolation properties in networks can be captured by non-interference, and our type system can detect violations of these properties while certifying correct programs. (A toy two-level information-flow check appears after this listing.)
  9.
  10.
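
For entry 1, the following is a minimal sketch of the backward-error notion that Bean bounds, using the textbook normwise relative backward error for a linear system. It illustrates the concept only, not Bean's analysis; the matrix and right-hand side are arbitrary example values (Python with NumPy).

    import numpy as np

    # Normwise relative backward error of a computed solution x_hat to A x = b:
    #   eta = ||b - A x_hat|| / (||A|| * ||x_hat|| + ||b||)
    # A tiny eta means x_hat exactly solves a slightly perturbed system, i.e.
    # the computation is backward stable on this input.
    def backward_error(A, b, x_hat):
        r = b - A @ x_hat
        return np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x_hat)
                                    + np.linalg.norm(b))

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x_hat = np.linalg.solve(A, b)        # computed in floating point
    print(backward_error(A, b, x_hat))   # on the order of the unit roundoff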
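
For entry 3, here is a small sketch of piecewise uniform valuations and an envy-freeness check for the classic two-agent cut-and-choose protocol. The valuations and the cut point are made-up example values, and none of this is Slice syntax.

    # An agent's piecewise uniform valuation is a list of desired intervals in the
    # cake [0, 1]; the value of a piece is the total desired length it contains.
    def value(desired, piece):
        lo, hi = piece
        return sum(max(0.0, min(b, hi) - max(a, lo)) for a, b in desired)

    # Hypothetical valuations: agent 0 wants [0, 0.5], agent 1 wants [0.25, 1].
    agents = [[(0.0, 0.5)], [(0.25, 1.0)]]

    # Cut and choose: agent 0 cuts so that both pieces are worth the same to her
    # (here the cut is at 0.25), then agent 1 takes the piece she values more.
    pieces = [(0.0, 0.25), (0.25, 1.0)]
    pick_1 = max(pieces, key=lambda p: value(agents[1], p))
    pick_0 = next(p for p in pieces if p != pick_1)
    allocation = {0: pick_0, 1: pick_1}

    # Envy-freeness: no agent values the other's piece more than their own.
    envy_free = all(value(agents[i], allocation[i]) >= value(agents[i], allocation[j])
                    for i in (0, 1) for j in (0, 1))
    print(envy_free)    # True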
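
For entry 4, this sketch shows the standard rounding model that such analyses build on, fl(x op y) = (x op y)(1 + d) with |d| bounded by the unit roundoff u, and the resulting first-order worst-case relative bound of about (n - 1)u for naively summing n nonnegative numbers. It illustrates the kind of bound involved, not the Λnum type system itself.

    import math
    import numpy as np

    u = np.finfo(np.float64).eps / 2     # unit roundoff for IEEE 754 double precision
    xs = np.random.rand(1000).tolist()   # arbitrary nonnegative inputs

    naive = 0.0
    for x in xs:
        naive += x                       # one rounding, i.e. one (1 + d) factor, per addition

    reference = math.fsum(xs)            # correctly rounded sum, used as the reference value
    rel_err = abs(naive - reference) / reference
    print(rel_err, (len(xs) - 1) * u)    # observed error vs. the first-order worst-case bound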
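
For entry 5, the following is a toy sketch of the automata-theoretic monitoring idea: a small state machine tracks each request's path through the services and rejects paths that violate a policy. The service names and the policy ("the database may only be reached after the auth service") are hypothetical, not the authors' policy language or enforcement mechanism.

    # States of the monitor: START (nothing seen), AUTHED (auth passed), DENY (violation).
    START, AUTHED, DENY = "start", "authed", "deny"

    def step(state, service):
        if state == DENY:
            return DENY
        if service == "auth":
            return AUTHED
        if service == "database" and state != AUTHED:
            return DENY                    # database reached without going through auth
        return state

    def admit(path):
        state = START
        for service in path:
            state = step(state, service)
        return state != DENY

    print(admit(["frontend", "auth", "database"]))   # True: policy satisfied
    print(admit(["frontend", "database"]))           # False: policy violated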
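
For entry 6, this sketch illustrates the notion of conditional independence that DIBI formulas characterise in discrete probability, checked numerically on a made-up joint distribution; the categorical construction from the paper is not reproduced here.

    from itertools import product

    # X and Y are conditionally independent given Z iff, whenever P(z) > 0,
    #   P(x, y | z) = P(x | z) * P(y | z)   for all x, y.
    def cond_indep(joint, tol=1e-12):
        pz, pxz, pyz = {}, {}, {}
        for (x, y, z), p in joint.items():
            pz[z] = pz.get(z, 0.0) + p
            pxz[(x, z)] = pxz.get((x, z), 0.0) + p
            pyz[(y, z)] = pyz.get((y, z), 0.0) + p
        return all(abs(p / pz[z] - (pxz[(x, z)] / pz[z]) * (pyz[(y, z)] / pz[z])) < tol
                   for (x, y, z), p in joint.items() if pz[z] > 0)

    # Z is a fair coin and X, Y are independent noisy copies of Z, so X and Y
    # are conditionally independent given Z by construction.
    joint = {(x, y, z): 0.5 * (0.9 if x == z else 0.1) * (0.8 if y == z else 0.2)
             for x, y, z in product([0, 1], repeat=3)}
    print(cond_indep(joint))    # True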
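
For entry 7, here is a plain enumeration sketch of the path-probability idea, applied to Freivalds' randomized check of a matrix product with a deliberately wrong C: each outcome of the random 0/1 vector is one path with probability 1/4, and the probabilities of the paths that wrongly report "equal" are summed. The matrices are arbitrary example values, and this is not the KLEE-based tool from the paper.

    from fractions import Fraction
    from itertools import product
    import numpy as np

    A = np.array([[1, 1], [0, 1]])
    B = np.array([[1, 0], [1, 1]])
    C = np.array([[2, 1], [1, 0]])           # deliberately wrong: A @ B != C

    wrong = Fraction(0)
    for bits in product([0, 1], repeat=2):   # each r is drawn uniformly: probability 1/4
        r = np.array(bits)
        if np.array_equal(A @ (B @ r), C @ r):
            wrong += Fraction(1, 4)          # this path wrongly reports "equal"
    print(wrong)                             # 1/2, within the textbook bound of at most 1/2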
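
For entry 8, the following is a toy two-level information-flow check in the spirit of non-interference: a write is allowed only if the label of the data flows to the label of the destination. The field names and labels are hypothetical; the actual type system operates on P4 programs through the p4c-based P4BID tool.

    # Two-point security lattice: LOW may flow to HIGH, but HIGH must never flow to LOW.
    LOW, HIGH = 0, 1
    labels = {"pkt.dst_port": LOW, "pkt.tos": LOW, "meta.internal_key": HIGH}

    def check_assign(dst, srcs):
        # The assignment dst := f(srcs) is allowed only if every source label
        # flows to (is at most) the destination label.
        src_label = max((labels[s] for s in srcs), default=LOW)
        return src_label <= labels[dst]

    print(check_assign("pkt.tos", ["pkt.dst_port"]))            # True: LOW -> LOW
    print(check_assign("pkt.dst_port", ["meta.internal_key"]))  # False: HIGH -> LOW leak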